# Large parameter scale

- **Qwen3 235B A22B 8bit** — mlx-community · Apache-2.0
  An 8-bit quantized version converted from Qwen/Qwen3-235B-A22B, suitable for text generation tasks.
  Tags: Large Language Model · Downloads: 477 · Likes: 2
- **Gemma 3 4b It Uncensored DBL X Int2 Quantized** — Kfjjdjdjdhdhd
  A pretrained model based on the Transformers library, suitable for natural language processing tasks.
  Tags: Large Language Model, Transformers · Downloads: 39 · Likes: 1
- **Codellama 13b Python Hf** — meta-llama
  Code Llama is a family of pretrained and fine-tuned generative text models released by Meta, with parameter scales ranging from 7 billion to 34 billion. This model is the 13-billion-parameter Python-specialized version.
  Tags: Large Language Model, Transformers, Other · Downloads: 636 · Likes: 7
- **Hebrew Gemma 11B Instruct** — yam-peleg · Other
  A version of the Hebrew-Gemma-11B generative text model fine-tuned on multi-turn dialogue datasets.
  Tags: Large Language Model, Transformers, Multilingual · Downloads: 2,670 · Likes: 23
- **Llama 65b Instruct** — upstage
  A 65B-parameter instruction-tuned large language model developed by Upstage on the LLaMA architecture, supporting long-text processing.
  Tags: Large Language Model, Transformers, English · Downloads: 144 · Likes: 14
- **Polycoder 0.4B** — NinedayWang
  PolyCoder is a 400-million-parameter large language model designed specifically for code generation and understanding tasks.
  Tags: Large Language Model, Transformers · Downloads: 177 · Likes: 5
- **Codegen 16B Mono** — Salesforce · BSD-3-Clause
  CodeGen-Mono 16B is an autoregressive language model for program synthesis, specializing in generating executable code from English prompts.
  Tags: Large Language Model, Transformers · Downloads: 227 · Likes: 126
© 2025 AIbase